
Surrogate Regret Bounds for Polyhedral Losses

Neural Information Processing Systems

Surrogate risk minimization is a ubiquitous paradigm in supervised machine learning, wherein a target problem is solved by minimizing a surrogate loss on a dataset. Surrogate regret bounds, also called excess risk bounds, are a common tool to prove generalization rates for surrogate risk minimization. While surrogate regret bounds have been developed for certain classes of loss functions, such as proper losses, general results are relatively sparse. We provide two general results. The first gives a linear surrogate regret bound for any polyhedral (piecewise-linear and convex) surrogate, meaning that surrogate generalization rates translate directly to target rates. The second shows that for sufficiently non-polyhedral surrogates, the regret bound is a square root, meaning fast surrogate generalization rates translate to slow rates for the target. Together, these results suggest polyhedral surrogates are optimal in many cases.
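As a concrete numerical sketch of the two regimes (an illustration, not taken from the paper), compare binary classification conditional risks for the hinge loss, which is polyhedral, against the logistic loss, which is not. With positive-class probability `eta` and a score `t` on the wrong side of zero, the hinge surrogate regret equals the 0-1 target regret (a linear relationship), while the logistic surrogate regret shrinks like the square of the target regret, so the target regret only tracks the square root of the surrogate regret:

```python
import math

def hinge_regret(eta, t):
    # Conditional hinge risk at score t, minus its minimum over all scores.
    risk = eta * max(0.0, 1 - t) + (1 - eta) * max(0.0, 1 + t)
    opt = 2 * min(eta, 1 - eta)  # attained at t = sign(2*eta - 1)
    return risk - opt

def logistic_regret(eta, t):
    # Conditional logistic risk at score t, minus its minimum (binary entropy).
    risk = eta * math.log(1 + math.exp(-t)) + (1 - eta) * math.log(1 + math.exp(t))
    opt = -(eta * math.log(eta) + (1 - eta) * math.log(1 - eta))
    return risk - opt

def target_regret(eta, t):
    # 0-1 conditional regret: pay |2*eta - 1| when sign(t) disagrees
    # with the Bayes-optimal sign (a tie at t = 0 counts as an error).
    return abs(2 * eta - 1) if t * (2 * eta - 1) <= 0 else 0.0

eta, t = 0.51, 0.0  # slightly favours label +1; score on the wrong side
print(target_regret(eta, t), hinge_regret(eta, t))                    # equal: linear bound
print(target_regret(eta, t), math.sqrt(2 * logistic_regret(eta, t)))  # sqrt bound, nearly tight
```

Here the hinge (polyhedral) surrogate regret matches the target regret exactly, whereas the logistic surrogate regret is two orders of magnitude smaller than the target regret, so only a square-root relationship survives, consistent with the paper's dichotomy.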






Reviews: An Embedding Framework for Consistent Polyhedral Surrogates

Neural Information Processing Systems

This work considers the relationship between convex surrogate losses and learning problems such as classification and ranking. The authors prove that this approach is equivalent, in a strong sense, to working with polyhedral (piecewise-linear convex) losses, and give a construction of a link function through which L is a consistent surrogate for the loss it embeds. Some examples are presented to verify the theoretical analysis. This is an interesting direction in learning theory, but I have some concerns: 1) What is the motivation for polyhedral losses? The authors should present some real applications and show their importance, especially for new learning problems and settings.



Surrogate Regret Bounds for Polyhedral Losses

Frongillo, Rafael, Waggoner, Bo

arXiv.org Machine Learning